Supervised Deep-Learning (DL)-based reconstruction algorithms have shown state-of-the-art results for highly-undersampled dynamic Magnetic Resonance Imaging (MRI) reconstruction. However, their dependence on large amounts of high-quality ground-truth data, together with the generalization problem, hinders their application. Recently, Implicit Neural Representation (INR) has emerged as a powerful DL-based tool for solving inverse problems by characterizing the attributes of a signal as a continuous function of the corresponding coordinates in an unsupervised manner. In this work, we propose an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data, which takes only spatiotemporal coordinates as inputs. Specifically, the proposed INR represents the dynamic MRI images as an implicit function and encodes them into a neural network. The weights of the network are learned solely from the sparsely-acquired (k, t)-space data itself, without external training datasets or prior images. Benefiting from the strong implicit continuity regularization of INR together with explicit regularization for low-rankness and sparsity, our proposed method outperforms the compared scan-specific methods at various acceleration factors. For example, experiments on retrospective cardiac cine datasets show an improvement of 5.5 to 7.1 dB in PSNR at extremely high accelerations (up to 41.6-fold). The high quality and inherent continuity of the images provided by INR have great potential to further improve the spatiotemporal resolution of dynamic MRI, without the need for any training data.
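A minimal sketch of the core idea, not the authors' implementation: an MLP maps normalized (x, y, t) coordinates to complex image intensity, and a data-consistency loss compares the FFT of a predicted frame against the acquired k-space samples. The network size is an assumption, and the explicit low-rank and sparsity terms mentioned above are omitted here.

```python
# Sketch of an INR for dynamic MRI: coordinates in, complex intensity out.
import torch
import torch.nn as nn

class DynamicINR(nn.Module):
    def __init__(self, in_dim=3, hidden=256, layers=4):
        super().__init__()
        blocks, dim = [], in_dim
        for _ in range(layers):
            blocks += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        blocks.append(nn.Linear(dim, 2))  # real and imaginary parts
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):  # coords: (N, 3) with (x, y, t) in [-1, 1]
        out = self.net(coords)
        return torch.complex(out[..., 0], out[..., 1])

def data_consistency_loss(model, coords, mask, kspace):
    # coords: full (H*W, 3) grid of one frame; mask: sampled k-space locations
    frame = model(coords).reshape(kspace.shape)
    pred_k = torch.fft.fft2(frame, norm="ortho")
    return (pred_k[mask] - kspace[mask]).abs().pow(2).mean()
```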
Time series anomaly detection strives to uncover potential abnormal behaviors and patterns in temporal data and has fundamental significance in diverse application scenarios. Constructing an effective detection model usually requires adequate training data stored in a centralized manner; however, this requirement sometimes cannot be satisfied in realistic scenarios. As a prevailing approach to this problem, federated learning has demonstrated its power to leverage distributed data while protecting the privacy of data providers. However, it remains unclear how existing time series anomaly detection algorithms perform with decentralized data storage and privacy protection through federated learning. To study this, we construct a federated time series anomaly detection benchmark, named FedTADBench, which involves five representative time series anomaly detection algorithms and four popular federated learning methods. We would like to answer the following questions: (1) How do time series anomaly detection algorithms perform when combined with federated learning? (2) Which federated learning method is the most appropriate for time series anomaly detection? (3) How do federated time series anomaly detection approaches perform on different partitions of data across clients? Numerous results and corresponding analyses are provided from extensive experiments with various settings. The source code of our benchmark is publicly available at https://github.com/fanxingliu2020/FedTADBench.
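As a rough illustration of the federated side of such a benchmark, the sketch below implements one FedAvg communication round (FedAvg being one of the popular federated methods a benchmark like this would plausibly include); the `local_train` routine and the client data loaders are assumed placeholders, not part of FedTADBench's actual API.

```python
# One FedAvg round: clients train local copies, the server averages weights.
import copy
import torch

def fedavg_round(global_model, client_loaders, local_train, device="cpu"):
    client_states, weights = [], []
    for loader in client_loaders:
        local = copy.deepcopy(global_model).to(device)
        local_train(local, loader)  # client-side SGD on private data
        client_states.append({k: v.cpu() for k, v in local.state_dict().items()})
        weights.append(len(loader.dataset))  # weight clients by data size
    total = float(sum(weights))
    avg = {}
    for k in client_states[0]:
        stacked = torch.stack([s[k].float() * (w / total)
                               for s, w in zip(client_states, weights)])
        avg[k] = stacked.sum(0).to(client_states[0][k].dtype)
    global_model.load_state_dict(avg)
    return global_model
```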
Structure-guided image completion aims to inpaint a local region of an image according to an input guidance map from users. While this task enables many practical applications for interactive editing, existing methods often struggle to hallucinate realistic object instances in complex natural scenes. This limitation is partially due to the lack of semantic-level constraints inside the hole region as well as the lack of a mechanism to enforce realistic object generation. In this work, we propose a learning paradigm that consists of semantic discriminators and object-level discriminators for improving the generation of complex semantics and objects. Specifically, the semantic discriminators leverage pretrained visual features to improve the realism of the generated visual concepts. Moreover, the object-level discriminators take aligned instances as inputs to enforce the realism of individual objects. Our proposed scheme significantly improves generation quality and achieves state-of-the-art results on various tasks, including segmentation-guided completion, edge-guided manipulation, and panoptically-guided manipulation on the Places2 dataset. Furthermore, our trained model is flexible and can support multiple editing use cases, such as object insertion, replacement, removal, and standard inpainting. In particular, our trained model combined with a novel automatic image completion pipeline achieves state-of-the-art results on the standard inpainting task.
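A hedged sketch of what a semantic discriminator could look like: a frozen pretrained backbone (here a torchvision ResNet-50, an assumption; the paper's feature extractor may differ) feeds a small trainable head that scores images as real or generated.

```python
# Semantic discriminator sketch: frozen pretrained features + trainable head.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SemanticDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V2")
        # keep everything up to and including global average pooling
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.features.parameters():
            p.requires_grad = False  # preserve the pretrained semantics
        self.head = nn.Linear(2048, 1)

    def forward(self, x):  # x: (B, 3, H, W), ImageNet-normalized
        with torch.no_grad():
            f = self.features(x).flatten(1)
        return self.head(f)  # real/fake logit per image
```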
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels. Recently, two-stage self-training methods have achieved significant improvements by self-generating pseudo labels and self-refining anomaly scores with these labels. Since the pseudo labels play a crucial role, we propose an enhancement framework that exploits completeness and uncertainty properties for effective self-training. Specifically, we first design a multi-head classification module (each head serves as a classifier) with a diversity loss to maximize the distribution differences of predicted pseudo labels across heads, which encourages the generated pseudo labels to cover as many abnormal events as possible. We then devise an iterative uncertainty pseudo-label refinement strategy, which improves not only the initial pseudo labels but also the updated ones obtained by the desired classifier in the second stage. Extensive experimental results demonstrate that the proposed method performs favorably against state-of-the-art approaches on the UCF-Crime, TAD, and XD-Violence benchmark datasets.
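The sketch below illustrates one plausible form of the multi-head module and diversity loss: each head scores video snippets, and the negated pairwise L1 distance between head outputs is minimized, so minimizing the loss pushes the heads' pseudo-label distributions apart. The feature dimension, head count, and distance measure are assumptions.

```python
# Multi-head snippet classifier with a diversity loss across heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadClassifier(nn.Module):
    def __init__(self, feat_dim=1024, num_heads=3):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(feat_dim, 1) for _ in range(num_heads))

    def forward(self, feats):  # feats: (B, T, feat_dim) snippet features
        # one anomaly score per snippet, per head
        return [torch.sigmoid(h(feats)).squeeze(-1) for h in self.heads]

def diversity_loss(scores):
    """Negated pairwise L1 distance: minimizing this maximizes the
    differences between the heads' predicted score distributions."""
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(i + 1, len(scores)):
            loss = loss - F.l1_loss(scores[i], scores[j])
            pairs += 1
    return loss / max(pairs, 1)
```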
Change detection (CD) aims to decouple object changes (i.e., objects missing or appearing) from background changes (i.e., environment variations such as lighting and seasonal changes) between two images captured of the same scene over a long time span, and it has critical applications in disaster management, urban development, etc. In particular, the endless patterns of background changes require detectors to generalize well to unseen environment variations, making this task significantly challenging. Recent deep learning-based methods develop novel network architectures or optimization strategies with paired training examples, which neither handle the generalization issue explicitly nor avoid huge manual pixel-level annotation efforts. In this work, as the first attempt in the CD community, we study the generalization issue of CD from the perspective of data augmentation and develop a novel weakly supervised training algorithm that needs only image-level labels. Different from general augmentation techniques for classification, we propose background-mixed augmentation, specifically designed for change detection: examples are augmented under the guidance of a set of background-changing images, letting deep CD models see diverse environment variations. Moreover, we propose an augmented-and-real data consistency loss that significantly improves generalization. As a general framework, our method can enhance a wide range of existing deep learning-based detectors. We conduct extensive experiments on two public datasets and enhance four state-of-the-art methods, demonstrating the advantages of our method. We release the code at https://github.com/tsingqguo/bgmix.
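A minimal sketch of the two ideas, with the blending rule and loss form as assumptions rather than the authors' exact formulation: `background_mix` shifts a training image toward a background-changing image, and the consistency loss ties together the change maps predicted from the real and augmented pairs.

```python
# Background-mixed augmentation and augmented/real consistency (sketch).
import torch

def background_mix(image, background, alpha=0.5):
    """image, background: (C, H, W) tensors in [0, 1]. The augmented image
    keeps the object content of `image` while its environment is shifted
    toward `background` by a random blend weight."""
    lam = torch.empty(1).uniform_(0.0, alpha).item()
    return (1.0 - lam) * image + lam * background

def consistency_loss(model, img_a, img_b, mixed_a):
    """The change map from the augmented pair should match the one
    from the real pair, since only the environment was altered."""
    real_pred = model(img_a, img_b)
    aug_pred = model(mixed_a, img_b)
    return (real_pred - aug_pred).abs().mean()
```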
Neural Radiance Field (NeRF) has received widespread attention in Sparse-View Computed Tomography (SVCT) reconstruction as a self-supervised deep learning framework. NeRF-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn this function by minimizing a loss on the SV sinogram. Benefiting from the continuous representation provided by NeRF, high-quality CT images can be reconstructed. However, existing NeRF-based SVCT methods strictly assume that there is no relative motion during CT acquisition, because they require accurate projection poses to model the X-rays that produce the SV sinogram. As a result, these methods suffer from severe performance drops in real SVCT imaging with motion. In this work, we propose a self-calibrating neural field to recover an artifact-free image from a rigid-motion-corrupted SV sinogram without using any external data. Specifically, we parametrize the inaccurate projection poses caused by rigid motion as trainable variables and then jointly optimize these pose variables and the MLP. We conduct numerical experiments on a public CT image dataset. The results indicate that our model significantly outperforms two representative NeRF-based methods on SVCT reconstruction tasks with four different levels of rigid motion.
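The joint optimization can be sketched as follows, with the projection routine, the pose parametrization (a per-view rotation angle plus an in-plane shift), and the learning rates all assumptions: the pose corrections are ordinary trainable tensors placed in the same optimizer as the MLP.

```python
# Joint pose-and-field optimization sketch for self-calibrating SVCT.
import torch

num_views = 60
pose_angles = torch.zeros(num_views, requires_grad=True)     # rotation offsets
pose_shifts = torch.zeros(num_views, 2, requires_grad=True)  # in-plane shifts

mlp = torch.nn.Sequential(
    torch.nn.Linear(2, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 1),
)

optimizer = torch.optim.Adam(
    [{"params": mlp.parameters(), "lr": 1e-4},
     {"params": [pose_angles, pose_shifts], "lr": 1e-3}]
)

def training_step(project, sinogram):
    # `project` (assumed) renders sinogram views by integrating MLP outputs
    # along X-rays whose geometry depends on the trainable pose variables.
    pred = project(mlp, pose_angles, pose_shifts)
    loss = (pred - sinogram).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```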
Despite a surge of recent advances in promoting machine learning (ML) fairness, existing mainstream approaches mostly require retraining or finetuning the entire weights of a neural network to meet the fairness criteria. However, this is often infeasible in practice for large-scale trained models due to high computational and storage costs, low data efficiency, and model privacy concerns. In this paper, we propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique. Specifically, FairReprogram considers the neural model fixed and instead appends to the input a set of perturbations, called the fairness trigger, which is tuned toward the fairness criteria under a min-max formulation. We further introduce an information-theoretic framework that explains why and under what conditions fairness goals can be achieved using the fairness trigger. We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output predictions of fixed ML models by providing false demographic information, which hinders the model from utilizing the correct demographic information to make predictions. Extensive experiments on NLP and CV datasets demonstrate that our method achieves better fairness improvements than retraining-based methods, with far less training cost and data dependency, under two widely-used fairness criteria.
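A hedged sketch of the reprogramming loop under the min-max formulation described above: the classifier stays frozen, an adversary is trained to recover demographics from its outputs, and only the input trigger is updated so as to preserve task accuracy while fooling the adversary. The adversary's input (model logits), the border mask, and the trade-off weight `lam` are assumptions, not the paper's exact design.

```python
# Fairness-trigger optimization sketch: only the trigger is trainable.
import torch
import torch.nn.functional as F

def apply_trigger(x, trigger, mask):
    """x: (B, C, H, W) images; the trigger lives only where mask == 1,
    e.g., a border region around the image."""
    return x + mask * trigger

def reprogram_step(frozen_model, adversary, x, y, a, trigger, mask,
                   trig_opt, adv_opt, lam=1.0):
    """y: task labels, a: demographic labels. frozen_model's parameters
    belong to neither optimizer, so they never change."""
    # inner step: adversary learns to predict demographics from outputs
    logits = frozen_model(apply_trigger(x, trigger, mask)).detach()
    adv_loss = F.cross_entropy(adversary(logits), a)
    adv_opt.zero_grad()
    adv_loss.backward()
    adv_opt.step()
    # outer step: trigger keeps task accuracy while fooling the adversary
    logits = frozen_model(apply_trigger(x, trigger, mask))
    trig_loss = F.cross_entropy(logits, y) - lam * F.cross_entropy(adversary(logits), a)
    trig_opt.zero_grad()
    trig_loss.backward()
    trig_opt.step()
    return trig_loss.item()
```

Here `trigger` would be created as, e.g., `torch.zeros(1, C, H, W, requires_grad=True)` and passed alone to `trig_opt`, so the entire adaptation cost is a single input-sized tensor.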
Deep Q-learning Network (DQN) is a successful way to combine reinforcement learning with deep neural networks, and it has led to the widespread application of reinforcement learning. One challenging problem when applying DQN or other reinforcement learning algorithms to real-world problems is data collection. Therefore, how to improve data efficiency is one of the most important problems in reinforcement learning research. In this paper, we propose a framework that uses the maximum-mean loss in a deep Q-network (M²DQN). Instead of sampling one batch of experiences in each training step, we sample several batches from the experience replay and update the parameters such that the maximum TD-error of these batches is minimized. The proposed method can be combined with most existing techniques of the DQN algorithm by replacing the loss function. We verify the effectiveness of this framework with one of the most widely used techniques, Double DQN (DDQN), in several Gym games. The results show that our method leads to substantial improvements in both learning speed and performance.
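A minimal sketch of the max-mean idea on top of a Double-DQN target (the replay-buffer interface is an assumption): several batches are sampled, each batch's mean TD-error loss is computed, and only the largest one is backpropagated.

```python
# M²DQN-style update: minimize the worst batch's mean TD-error.
import torch
import torch.nn.functional as F

def m2dqn_update(q_net, target_net, replay, optimizer,
                 num_batches=4, batch_size=32, gamma=0.99):
    losses = []
    for _ in range(num_batches):
        s, a, r, s2, done = replay.sample(batch_size)  # assumed interface
        with torch.no_grad():
            # Double-DQN target: online net selects, target net evaluates
            a2 = q_net(s2).argmax(dim=1, keepdim=True)
            target = r + gamma * (1 - done) * target_net(s2).gather(1, a2).squeeze(1)
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        losses.append(F.smooth_l1_loss(q, target))
    loss = torch.stack(losses).max()  # max over the batches' mean losses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```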
Longitudinal fetal brain atlases are a powerful tool for understanding and characterizing the complex process of fetal brain development. Existing fetal brain atlases are typically constructed from brain images averaged independently at discrete time points. Because of differences in the stochastic trends among samples at different time points, the resulting atlases suffer from temporal inconsistency, which may lead to errors in estimating characteristic parameters of brain development along the timeline. To this end, we propose a multi-stage deep-learning framework that addresses the temporal inconsistency problem as a 4D (3D brain volume + 1D age) image denoising task. Using implicit neural representation, we construct a continuous, noise-free longitudinal fetal brain atlas as a function of the 4D spatiotemporal coordinate. Experimental results on two public fetal brain atlases (the CRL and FBA-Chinese atlases) show that the proposed method can significantly improve atlas temporal consistency while maintaining a good representation of fetal brain structure. In addition, the continuous longitudinal fetal brain atlas can be widely applied to generate finer 4D atlases in both spatial and temporal resolution.
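A minimal sketch of the 4D implicit representation, with the architecture and coordinate normalization as assumptions: once the MLP is fitted, atlas volumes can be rendered at arbitrary continuous gestational ages.

```python
# 4D implicit atlas sketch: (x, y, z, age) -> intensity.
import torch
import torch.nn as nn

atlas = nn.Sequential(
    nn.Linear(4, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

def sample_volume(age, size=64):
    """Render the atlas volume at a continuous gestational age in [-1, 1]."""
    axes = [torch.linspace(-1, 1, size)] * 3
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).reshape(-1, 3)
    coords = torch.cat([grid, torch.full((grid.shape[0], 1), float(age))], dim=1)
    with torch.no_grad():
        return atlas(coords).reshape(size, size, size)
```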
Fluorescence microscopy is a key driver of discoveries in biomedical research. However, due to the limitations of microscope hardware and the characteristics of the observed samples, fluorescence microscopy images are susceptible to noise. Recently, a few self-supervised deep learning (DL) denoising methods have been proposed. However, the training efficiency and denoising performance of existing methods are relatively low on real-scene noise. To address this issue, this paper proposes the self-supervised image denoising method Noise2SR (N2SR), which trains a simple and effective image denoising model from a single noisy observation. Our Noise2SR denoising model is designed to be trained with paired noisy images of different dimensions. Benefiting from this training strategy, Noise2SR is more efficiently self-supervised and able to restore more image details from a single noisy observation. Experimental results on simulated noise and real microscopy noise show that Noise2SR outperforms two blind-spot-based self-supervised deep learning image denoising methods. We envision that Noise2SR has the potential to improve the quality of other kinds of scientific imaging as well.
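A hedged reading of the training strategy in code, with the exact sub-sampling scheme as an assumption: one noisy observation is split into two interleaved half-resolution views, the model super-resolves one view back to full resolution, and the pixels that were held out of its input supervise it.

```python
# Noise2SR-style self-supervised loss from a single noisy image (sketch).
import torch
import torch.nn.functional as F

def split_subsample(noisy):
    """noisy: (B, C, H, W). Returns two interleaved half-resolution views
    drawn from disjoint pixel locations of the same observation."""
    view_a = noisy[..., 0::2, 0::2]
    view_b = noisy[..., 1::2, 1::2]
    return view_a, view_b

def noise2sr_loss(model, noisy):
    """`model` must map a (B, C, H/2, W/2) input to a (B, C, H, W) output,
    i.e., denoise and 2x super-resolve at the same time."""
    view_a, view_b = split_subsample(noisy)
    pred_full = model(view_a)               # full-resolution denoised estimate
    pred_at_b = pred_full[..., 1::2, 1::2]  # locations unseen by the input view
    return F.mse_loss(pred_at_b, view_b)
```

Because the supervising pixels never enter the network input, the noise there is independent of the prediction, which is what lets a single noisy observation act as its own training target.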